Crate caffe2_operator


Macros

  • A helper macro that should ONLY be used in the operator constructor to check whether needed features are met. If not, it throws the UnsupportedOperatorFeature exception with the given message.
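    A minimal sketch of how such a feature check is typically used, modeled on upstream Caffe2's C++ API (the macro name OPERATOR_NEEDS_FEATURE matches that codebase; the operator itself is hypothetical):

    @code
    // Hypothetical engine-specific operator that rejects unsupported
    // settings in its constructor via the feature-check macro.
    template <class Context>
    class MyFixedAxisSumOp final : public Operator<Context> {
     public:
      USE_OPERATOR_CONTEXT_FUNCTIONS;
      template <class... Args>
      explicit MyFixedAxisSumOp(Args&&... args)
          : Operator<Context>(std::forward<Args>(args)...),
            axis_(this->template GetSingleArgument<int>("axis", 0)) {
        // Throws UnsupportedOperatorFeature when the condition fails.
        OPERATOR_NEEDS_FEATURE(axis_ == 0, "This engine only supports axis == 0.");
      }

     private:
      int axis_;
    };
    @endcode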
  • Helpers to implement runtime op polymorphism. Often it's convenient to make an op work on different input types (e.g. i32 vs i64 indices) or to special-case it for a particular input size (e.g. ScatterWeightedSum for a block size of 1 doesn't need to call Eigen). DispatchHelper provides compile-time generation of nested "if" statements, e.g. DispatchHelper<FixedValues<1, 4>>::call(this, block_size); unrolls into:

    @code
    if (block_size == 1) {
      return DoRunWithValue<1>();
    } else if (block_size == 4) {
      return DoRunWithValue<4>();
    } else {
      return DoRunWithValue<-1>();
    }
    @endcode

    The DoRunWithValue implementation can use its template argument in compile-time "if" statements, or proxy to functions in math.h, which often provide fixed-size implementations. Similarly, TensorTypes<int32_t, int64_t>(this, Input(0)) provides branching based on the type of the first input and calls DoRunWithType. Note that the same instance of the Op class is used, as the method, not the class, is templated. We might consider adding static class-level polymorphism later. A convenience macro, USE_DISPATCH_HELPER, is provided for declaring friendship in case DoRunWithValue or DoRunWithType are declared non-public.
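    As a fuller illustration, a hedged sketch of the type-dispatch pattern as it appears in upstream Caffe2 operators (the operator name is hypothetical):

    @code
    // Hypothetical op that dispatches on the dtype of its first input.
    template <class Context>
    class MyIndexedOp final : public Operator<Context> {
     public:
      USE_OPERATOR_CONTEXT_FUNCTIONS;
      USE_SIMPLE_CTOR_DTOR(MyIndexedOp);
      USE_DISPATCH_HELPER;  // friendship, since DoRunWithType is private

      bool RunOnDevice() override {
        // Expands to: if input is int32_t, DoRunWithType<int32_t>(); else ...
        return DispatchHelper<TensorTypes<int32_t, int64_t>>::call(
            this, Input(0));
      }

     private:
      template <typename TInd>
      bool DoRunWithType() {
        const auto& indices = Input(0);
        const TInd* idx = indices.template data<TInd>();
        // ... kernel specialized on the index type goes here ...
        return true;
      }
    };
    @endcode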

Structs

  • AsyncTask represents an asynchronous execution of a chain of ops.
  • Represents the state of an AsyncTask execution, which can be queried with IsCompleted/IsFailed. Callbacks are supported through SetCallback and are called upon the future's completion.
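    A minimal usage sketch, assuming the future API of upstream Caffe2's C++ implementation (the type name AsyncTaskFuture and the callback signature are assumptions and may differ in this crate):

    @code
    // Hedged sketch: reacting to the completion of an async task's future.
    void WatchFuture(AsyncTaskFuture* future) {
      future->SetCallback([](const AsyncTaskFuture* f) {
        // Invoked once the future completes, successfully or not.
        if (f->IsFailed()) {
          LOG(ERROR) << "async chain failed";
        } else {
          LOG(INFO) << "async chain completed";
        }
      });
    }
    @endcode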
  • This is for transferring tensor data between C2 and backends.


  • Special tag that can be listed in TensorTypes to denote that a special implementation in 'RunWithOtherType' needs to be called instead of failing. Obviously this needs to be the last item in the list, e.g. TensorTypes<float, double, GenericTensorImplementation>.
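    A hedged sketch of the fallback pattern, following upstream Caffe2, where the dispatched fallback method is named DoRunWithOtherType (the surrounding op is hypothetical):

    @code
    // Hypothetical op with a generic fallback for dtypes without a fast kernel.
    bool RunOnDevice() override {
      return DispatchHelper<
          TensorTypes<float, double, GenericTensorImplementation>>::
          call(this, Input(0));
    }

    template <typename T>
    bool DoRunWithType() {
      // Fast path, instantiated for float and double.
      return true;
    }

    bool DoRunWithOtherType() {
      // Fallback, called when Input(0) is neither float nor double.
      return true;
    }
    @endcode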

  • A helper class to indicate that the gradient mechanism is not ready. This should only be used sparingly, when the gradient does exist but we have not implemented it yet and are using this as a lazy excuse. Eventually, a gradient operator should be implemented.
  • A struct that holds the gradient operators and related gradient maps.
  • A struct that abstracts on top of dense and sparse blobs. For a dense blob, its gradient name should be written into dense_, and for a sparse blob, its gradient name should be written into indices_ for the sparse indices and values_ for the values.
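    For reference, a hedged sketch of the shape of this struct, modeled on the C++ GradientWrapper in upstream Caffe2 (the field and helper names come from that codebase and may differ here):

    @code
    // Sketch of a dense/sparse gradient wrapper, after upstream Caffe2.
    struct GradientWrapper {
      std::string dense_;    // set when the gradient is dense
      std::string indices_;  // sparse gradient: name of the indices blob
      std::string values_;   // sparse gradient: name of the values blob

      bool IsDense() const { return !dense_.empty(); }
      bool IsSparse() const { return !indices_.empty() || !values_.empty(); }
      bool IsEmpty() const { return !IsDense() && !IsSparse(); }
    };
    @endcode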

  • Net is a thin struct that owns all the operators together with the operator contexts.
  • A net test dummy op that does nothing but scaffolding. Here, we inherit from OperatorStorage because we instantiate on both CPU and GPU. In general, you want to only inherit from Operator.
  • A net test dummy op that does nothing but scaffolding. Here, we inherit from OperatorStorage because we instantiate on both CPU and GPU. In general, you want to only inherit from Operator.

  • Inherit to make your class observable.
  • Use this to implement an Observer using the Observer Pattern template.
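    A hedged sketch of the pattern, modeled on upstream Caffe2's ObserverBase<T>/Observable<T> (the Start/Stop hooks and AttachObserver come from that C++ API and are assumptions here):

    @code
    // Minimal observer that logs around each run of its subject.
    class LoggingOperatorObserver final : public ObserverBase<OperatorBase> {
     public:
      explicit LoggingOperatorObserver(OperatorBase* op)
          : ObserverBase<OperatorBase>(op) {}

      void Start() override { LOG(INFO) << "op starting"; }
      void Stop() override { LOG(INFO) << "op finished"; }
    };

    // Attaching it; Observable<T>::AttachObserver takes ownership.
    // op->AttachObserver(std::make_unique<LoggingOperatorObserver>(op));
    @endcode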
  • A class to record the schema of an op. OpSchema records the common interface of an op specified by its name. This is optional for each operator implemented in Caffe2, but is strongly recommended. To register an OpSchema, one can use the macro OPERATOR_SCHEMA(name) and then append the various functions in the class. For example, an op that takes in two inputs, produces one output, and allows the first input and the output to be in-place can be written as

    @code
    OPERATOR_SCHEMA(name)
        .NumInputs(2)
        .NumOutputs(1)
        .AllowInplace({{0, 0}});
    @endcode
  • A struct to store various cost information about an operator, such as FLOPs, total memory use, and parameters.
  • A registry to hold all the operator schemas. OpSchemaRegistry should not need to be instantiated.
  • Thin class that attaches the observer to all operators in the net.

  • This observer displays a description of each operator executed in a network. This includes input and output tensors (name, size, type), arguments, and execution time. It can be used to analyze different performance characteristics. Note: currently this observer only supports synchronized computation.
  • This is the very basic structure you need to run a network: all it does is simply run everything in sequence. If you want fancier control, such as DAG-like execution, check out the other, more capable net implementations.
  • SimpleRefCountNet is an implementation that adds an additional abstraction on top of SimpleNet: it tracks all the tensors, and for those considered internal/temporary, deletes them once their refcount drops to zero. In the context of a simple static run, this can be worked out at construction time: we do a pass through the network and track which blobs need to be reset after the execution of every op. To identify which blobs are considered temporary, we employ the following strategy: any blob that is (1) consumed but not produced by ops in the net, or (2) produced but not consumed by ops in the net, or (3) marked as external_output in the protobuf will NOT be considered temporary. For example, in a net with op A: in -> mid and op B: mid -> out, where out is an external_output, only mid is temporary and is freed right after B consumes it. In the long run, we should design proper functional interfaces so that nets are less imperative and more functional. Also, for now, SimpleRefCountNet should only be used for benchmarking purposes and not in production, since it is not going to provide a meaningful performance gain and is implicitly incompatible with the contract that earlier Nets expose: that all intermediate blobs are visible to the users.
  • StaticLinkingProtector is a helper class that ensures that the Caffe2 library is linked correctly with whole archives (in the case of static linking). What happens is that when CreateOperator is called for the first time, it instantiates an OperatorLinkingProtector object to check whether the operator registry is empty. If it is empty, this means that we are not linking the library properly. You should not need to use this class.
  • Same as TensorTypes but calls DoRunWithType2.
  • A helper class to indicate that the operator should have no gradient. This is used when the operator definition is designed to not have a gradient. Calling a gradient on this operator def will cause Caffe2 to quit.
  • An exception that can be thrown by an operator constructor, notifying that it does not support the given setting. This is usually used for specific engines that only implement a subset of the features required by the original operator schema. TODO(jiayq): make more feature-complete exception message.
  • Workspace is a class that holds all the related objects created during runtime: (1) all blobs, and (2) all instantiated networks. It is the owner of all these objects and deals with the scaffolding logistics.
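    A hedged usage sketch following the C++ Workspace API of upstream Caffe2 (CreateBlob, CreateNet, RunNet; this crate's surface may differ):

    @code
    // Sketch: a workspace owning blobs and an instantiated network.
    Workspace ws;
    ws.CreateBlob("X");  // the blob lives in, and is owned by, the workspace

    NetDef net_def;      // minimal (empty) net definition for illustration
    net_def.set_name("my_net");
    // ... add operators to net_def ...

    NetBase* net = ws.CreateNet(net_def);  // workspace owns the net instance
    if (net != nullptr) {
      ws.RunNet(net_def.name());
    }
    @endcode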
  • A helper class to indicate that the operator does not need gradient computation. Use the macro NO_GRADIENT to register operators that do not have gradients. Note: this is different from SHOULD_NOT_DO_GRADIENT: the latter means that the gradient computation should not flow through it at all, and throws an error if it is called.
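    For contrast, a hedged sketch of how the three gradient policies are declared in upstream Caffe2's C++ (the macro names come from that codebase; the operator names are placeholders):

    @code
    // No gradient is needed; the gradient pass silently skips this op.
    NO_GRADIENT(MyAccuracyOp);

    // Gradients must never flow through this op; asking for one is an error.
    SHOULD_NOT_DO_GRADIENT(MyPrintOp);

    // A gradient exists mathematically but has not been implemented yet.
    GRADIENT_NOT_IMPLEMENTED_YET(MySortOp);
    @endcode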

Enums

Constants

Traits

Functions

Type Definitions

Trait Aliases